
# 4-bit quantization compression

## Devstral Small 2507 4bit DWQ
- Publisher: mlx-community
- License: Apache-2.0
- Tags: Large Language Model, Supports Multiple Languages

A 4-bit DWQ-quantized language model in the MLX format that supports multilingual text generation tasks (see the loading sketch below).
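
A minimal loading sketch using the `mlx-lm` package. The repo id `mlx-community/Devstral-Small-2507-4bit-DWQ` is an assumption based on the listing above, and MLX models only run on Apple Silicon; treat this as an illustration rather than the model card's official instructions.

```python
# Minimal sketch, assuming the model is published as
# "mlx-community/Devstral-Small-2507-4bit-DWQ" and that mlx-lm
# is installed (pip install mlx-lm); Apple Silicon only.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Devstral-Small-2507-4bit-DWQ")

prompt = "Write a Python function that reverses a string."
text = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(text)
```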
## Molmo 7B D Bnb 4bit
- Publisher: cyan2k
- License: Apache-2.0
- Tags: Large Language Model, Transformers

Molmo-7B-D quantized to 4 bits with bitsandbytes (BnB). Quantization reduces the model size from about 30 GB to roughly 7 GB and lowers the GPU memory (VRAM) requirement to about 12 GB (see the loading sketch below).
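
A minimal sketch of the bitsandbytes 4-bit loading path that such a checkpoint relies on, using `transformers` with `BitsAndBytesConfig`. The base repo id `allenai/Molmo-7B-D-0924` and the need for `trust_remote_code` are assumptions not stated in the listing; the pre-quantized upload above can be loaded the same way without passing a quantization config.

```python
# Sketch of producing a BnB 4-bit model on the fly with
# transformers + bitsandbytes. Repo id is an assumption; requires
# a CUDA GPU with roughly 12 GB of free memory.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4 data type
    bnb_4bit_use_double_quant=True,         # also quantize the scales
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for matmuls
)

model = AutoModelForCausalLM.from_pretrained(
    "allenai/Molmo-7B-D-0924",   # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",           # place layers on the available GPU(s)
    trust_remote_code=True,      # Molmo ships custom modeling code
)
print(f"4-bit footprint: {model.get_memory_footprint() / 1e9:.1f} GB")
```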